Improving boosting methods with a stable loss function handling outliers

Authors

Abstract

In classification problems, abnormal observations are often encountered. How to obtain a stable model that deals with outliers has always been a subject of widespread concern. In this article, we draw on ideas from the AdaBoost algorithm and propose an asymptotically linear loss function, which makes the output function more stable for contaminated samples; two boosting algorithms were designed, based on different ways of updating, to handle outliers. In addition, a technique for overcoming the instability of Newton's method when dealing with weak convexity is introduced. Several experiments, where outliers are artificially added, show that the Discrete L-AdaBoost and Real L-AdaBoost algorithms find the boundary of each category consistently under the condition that the data are contaminated. Extensive real-world dataset experiments are used to test the robustness of the proposed method to noise.
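The paper's exact loss function is not reproduced on this page, but the core idea of an asymptotically linear loss can be illustrated with a standard example: AdaBoost's exponential loss grows exponentially in the negative margin, so a single badly misclassified outlier can dominate training, whereas a loss that grows only linearly for large negative margins (the logistic loss is one well-known instance) bounds an outlier's influence. The following sketch compares the two; it is an illustration of the general principle, not the paper's specific construction.

```python
import numpy as np

def exponential_loss(margin):
    """AdaBoost's loss: exp(-m) blows up for badly misclassified points."""
    return np.exp(-margin)

def logistic_loss(margin):
    """log(1 + e^{-m}) behaves like -m as m -> -inf: asymptotically linear."""
    return np.log1p(np.exp(-margin))

# Margins from badly misclassified (negative) to confidently correct (positive).
margins = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(exponential_loss(margins))  # exp(5) ~ 148: outlier dominates
print(logistic_loss(margins))     # log1p(exp(5)) ~ 5: outlier bounded
```

The contrast at margin -5 (about 148 vs. about 5) shows why an asymptotically linear loss yields a more stable output function on contaminated samples.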


Similar articles

Boosting in the presence of outliers: adaptive classification with non-convex loss functions

This paper examines the role and efficiency of the non-convex loss functions for binary classification problems. In particular, we investigate how to design a simple and effective boosting algorithm that is robust to the outliers in the data. The analysis of the role of a particular non-convex loss for prediction accuracy varies depending on the diminishing tail properties of the gradient of th...


The Most Robust Loss Function for Boosting

A boosting algorithm can be understood as a gradient descent algorithm on a loss function. It is often pointed out that the typical boosting algorithm, AdaBoost, is seriously affected by outliers. In this paper, loss functions for robust boosting are studied. Based on a concept from robust statistics, we propose a positive-part truncation of the loss function which makes the boosting algorith...
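The excerpt above mentions truncating the loss to bound the influence of outliers. As a generic illustration of loss truncation (an assumed form for illustration only, not that paper's exact construction), one can cap the exponential loss at a constant, so points with very negative margins contribute at most a bounded, constant penalty:

```python
import numpy as np

def truncated_exp_loss(margin, cap=5.0):
    """Cap the exponential loss at `cap` (illustrative truncation, not the
    paper's exact form): outliers contribute a bounded penalty, at the cost
    of making the loss non-convex."""
    return np.minimum(np.exp(-margin), cap)

print(truncated_exp_loss(np.array([-10.0, 0.0, 2.0])))
```

The resulting loss is bounded but non-convex, which is the trade-off robust boosting methods of this kind must manage.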


Improving Tree Decomposition Methods With Function Filtering

Iterative MCTEf (IMCTEf). Here φ(u, v) = M(v, u) is the approximated MCTEf message of a previous execution (one with a smaller r).

procedure IMCTEf(⟨X, D, C, k⟩, ⟨⟨V, E⟩, χ, ψ⟩)
INPUT: P = ⟨X, D, C, k⟩ is a WCSP instance; TD = ⟨⟨V, E⟩, χ, ψ⟩ is a tree decomposition.
1 for each (u, v) ∈ E do φ(u, v) := {∅};
2 r := 1;
3 repeat
4   MCTEf(r);
5   for each (u, v) ∈ E do φ(u, v) := M(u, v);
6   r := r + 1;
7 until exact solution o...


Forecasting the Tehran Stock Market by Machine Learning Methods using a New Loss Function

Stock market forecasting has attracted many researchers and investors, and many studies have been done in this field. These studies have led to the development of many predictive methods, the most widely used of which are machine learning-based methods. In machine learning-based methods, the loss function has a key role in determining the model weights. In this study a new loss function is ...



Journal

Journal title: International Journal of Machine Learning and Cybernetics

Year: 2023

ISSN: 1868-8071, 1868-808X

DOI: https://doi.org/10.1007/s13042-022-01766-6